DEVICE AND METHOD FOR GENERATING AN IMAGE SIGNAL AND DEVICE AND METHOD FOR PROCESSING AN IMAGE SIGNAL
Patent abstract:
Apparatus and method for generating an image signal, apparatus and method for processing an image signal, and image signal. An apparatus generates an image signal in which pixels are encoded in N-bit terms encoding at least one luma per pixel. A receiver (201) obtains high dynamic range pixel values, according to a first color representation, in M-bit input terms. A first generator (203) includes the high dynamic range pixel values in the image signal in N-bit terms, in accordance with a second color representation. A second generator (205) includes in the image signal an indicator that high dynamic range pixel values are encoded. In some examples, high dynamic range pixel values may be provided in a segment which may alternatively contain high or low dynamic range pixel values, and the indicator may indicate which type of data is included. The approach can, for example, facilitate the introduction of high dynamic range capability, for example in HDMI systems.
Publication number: BR112013028556B1
Application number: R112013028556-7
Filing date: 2012-04-27
Publication date: 2022-01-04
Inventors: Philip Steven Newton; Wiebe De Haan
Applicant: Koninklijke Philips N.V.
IPC primary class:
Patent description:
FIELD OF THE INVENTION [001] The invention relates to the generation and/or processing of an image signal comprising high dynamic range pixel values. BACKGROUND OF THE INVENTION [002] The digital coding of various source signals has become increasingly important over the last few decades, as digital signal representation and communication have increasingly replaced analog representation and communication. Research and development is ongoing into how to improve the quality that can be obtained from encoded images and video sequences while keeping the data rate at acceptable levels. [003] An important factor for perceived image quality is the dynamic range that can be reproduced when an image is displayed. Conventionally, however, the dynamic range of reproduced images has tended to be substantially reduced relative to normal vision. Indeed, luminance levels found in the real world span a dynamic range as wide as 14 orders of magnitude, from a moonless night to looking directly at the sun. The dynamic range of instantaneous luminance, and the corresponding human visual system response, can be between 10,000:1 and 100,000:1 on sunny days or at night. [004] Traditionally, the dynamic range of image sensors and displays has been limited to much smaller dynamic ranges. Displays are also often limited by the viewing environment (they may produce black when the luminance-generating mechanism is switched off, but their front glass still reflects, for example, ambient light; a television viewed on a sunny day may have a DR < 50:1). Consequently, it has traditionally been possible to store and transmit images in gamma-encoded, 8-bit formats without introducing perceptually noticeable artifacts on traditional rendering devices.
However, in an effort towards more accurate recording and livelier images, innovative High Dynamic Range (HDR) image sensors have been developed that are capable of recording dynamic ranges of more than 6 orders of magnitude. In addition, special effects, computer graphics enhancement and other post-production work are already routinely conducted at higher bit depths and with greater dynamic ranges. [005] Furthermore, the contrast and maximum luminance of state-of-the-art display systems continue to increase. Recently, new displays have been introduced with a maximum luminance as high as 4000 cd/m² and contrast ratios of up to perhaps 5 to 6 orders of magnitude, although this is typically reduced significantly in real-life viewing environments. Future displays are expected to provide even greater dynamic ranges and, specifically, higher maximum luminances and contrast ratios. When traditionally encoded 8-bit signals are displayed on such screens, annoying quantization and clipping artifacts may appear, the gray values of different regions may be rendered incorrectly, etc. Artifacts may be particularly noticeable if compression, such as DCT compression according to an MPEG or other still image or video compression standard, is used at some point along the imaging chain, from content creation to final rendering. Moreover, traditional video formats offer insufficient headroom and accuracy to convey the rich information contained in new HDR images. [006] As a result, there is a growing need for new approaches that allow a consumer to fully benefit from the capabilities of state-of-the-art (and future) sensors and display systems. Preferably, representations of this additional information are backwards compatible, so that legacy equipment can still receive ordinary video streams, while newer HDR-enabled devices can take full advantage of the additional information conveyed by the new format.
Thus, it is desirable that encoded video data not only represent HDR images but also allow encoding of corresponding traditional Low Dynamic Range (LDR) images that can be displayed on conventional equipment. [007] An important issue when introducing high dynamic range video and images is how to effectively encode and distribute the associated information. In particular, it is desirable that backward compatibility be maintained and that the introduction of high dynamic range images into existing systems be facilitated. Efficiency in terms of data rate and processing complexity is also significant. Another important issue is, of course, the resulting image quality. [008] Hence, an improved approach for distributing, communicating and/or rendering high dynamic range images would be advantageous. SUMMARY OF THE INVENTION [009] Accordingly, the invention preferably seeks to mitigate, alleviate or eliminate one or more of the above disadvantages, singly or in any combination. [010] According to one aspect of the invention, an apparatus is provided for generating an image signal in which pixels are encoded in N-bit terms encoding at least one luma per pixel, the apparatus comprising: a receiver for obtaining high dynamic range pixel values according to a first color representation in M-bit input terms; a first generator for including the high dynamic range pixel values in the image signal in N-bit terms, in accordance with a second color representation; and a second generator for including in the image signal an indicator that high dynamic range pixel values are encoded. [011] Note that pixel value terms may be encoded in separate data sections, e.g. components, or some sections may comprise non-High Dynamic Range (HDR) data per se. [012] The invention can provide an improved image signal for distributing High Dynamic Range (HDR) image data.
The approach may, in particular, provide improved backwards compatibility in many applications and/or may, for example, facilitate the introduction of HDR image distribution into existing imaging and video systems. [013] The image signal may be a single-image signal, such as a digital image file, or may, for example, be a video signal comprising a plurality of images. [014] The N-bit terms may comprise a plurality of components that may separately and individually represent different color components. An N-bit term may represent a plurality of color components. The N-bit term may be divided into different segments that may be allocated to individual color components. For example, N1 bits may be used for pixel data of a first color component, N2 bits for pixel data of a second color component, and N3 bits for pixel data of a third color component (where, for example, N1+N2+N3=N). As a specific example, an RGB color representation may be provided in N-bit terms, where N/3 bits are allocated to each of the R, G and B color components. [015] Similarly, the M-bit input terms may comprise a plurality of components that may separately and individually represent different color components. An M-bit term may represent a plurality of color components. The M-bit term may be divided into different segments that may be allocated to individual color components. For example, M1 bits may be used for pixel data of a first color component, M2 bits for pixel data of a second color component, and M3 bits for pixel data of a third color component (where, for example, M1+M2+M3=M). [016] The image signal may be a single continuous, all-inclusive image signal. However, in other embodiments, the image signal may be a composite or divided image signal. For example, the pixel data of the image, in the form of N-bit terms, may be distributed over several data packets or messages.
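As an illustrative sketch of the bit allocation described in paragraph [014], the snippet below packs three color components into a single N-bit term and recovers them again. The 10/10/10 split is an assumed example for illustration, not a layout mandated by the text.

```python
# Assumed example split: a 30-bit term with N1 = N2 = N3 = 10 bits
# per color component (N1 + N2 + N3 = N, as in paragraph [014]).
N1, N2, N3 = 10, 10, 10

def pack_term(c1: int, c2: int, c3: int) -> int:
    """Pack three component values into a single N-bit term."""
    assert 0 <= c1 < (1 << N1) and 0 <= c2 < (1 << N2) and 0 <= c3 < (1 << N3)
    return (c1 << (N2 + N3)) | (c2 << N3) | c3

def unpack_term(term: int) -> tuple[int, int, int]:
    """Recover the three component values from an N-bit term."""
    c3 = term & ((1 << N3) - 1)
    c2 = (term >> N3) & ((1 << N2) - 1)
    c1 = term >> (N2 + N3)
    return c1, c2, c3
```

The same packing applies unchanged to the M-bit input terms of paragraph [015], only with different field widths.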
Similarly, the indicator may be provided together with, or separately from, the N-bit terms, for example stored in a different part of a memory or provided over a different communication link. For example, the indicator may be transmitted in data packets or messages other than those carrying the N-bit terms. For example, the image signal may be divided into image data packets and control data packets, with the N-bit terms provided in the former and the indicator in the latter type of data packet. Minimally, the encoder and decoder would have a fixed way (or a fixed set of ways) of encoding HDR data into the available N-bit term, and the indicator could then be simple: it would suffice to state that the encoded data represents "HDR data" or, for example, "HDR type 1" or "HDR type 2" data instead of LDR data. The receiving side would then, according to, for example, the agreed type 2 encoding scenario, know how to ultimately transform this into the signal to be rendered on a screen (or printer, etc.). This scenario can be used, for example, when arbitrary HDR originals (e.g. 16-bit lumas with the maximum code corresponding to a peak white of 5000 nit, or 22-bit lumas with a peak white of 550000 nit) are first transformed into an intermediate signal (which is more usable for display, since a 550000 nit bright object cannot be rendered at all, so it is preferably first mapped to some value that still conveys great brightness yet can be rendered on a screen, for example 5000 nit). The difficult mathematical or artistic choices of converting a real-world scene representation into a useful, renderable signal are then taken out of this part of the imaging chain and handled in an earlier part, so that type 2 encoding need only deal with converting what ended up in the intermediate M-bit representation into the type 2 N-bit representation.
However, the indicator may be more complex or, stated otherwise, provided along with additional data that specifies exactly how the mapping to the N-bit signal has been made, so that, for example, the 22-bit/550000 nit originals can also be applied directly to the second part of the imaging chain and converted to the N-bit signal. In such cases, useful information would be (linear) scaling information (e.g. associated with a scaling between a first range of luminances associated with the first encoding of the M-bit input terms and a second range of the N-bit terms), such as, e.g., a 550000 nit level specification (or an indication of a derived white value, e.g. an estimated or scaled white level intended to be rendered at a reference display white level [which may be seen as an example of an associated display luminance], which an actual receiving screen can then optimally map according to what it can maximally output as absolute white; that is, it will render data that had a coded white level of, e.g., 5000 nit differently from data with a white level of 50000 nit; e.g., if the screen can output an absolute white of 10000 nit, it can render the first white [i.e. pixels having a code value of, for example, Y=1023] at a display output luminance of 6000 nit, and the second at a display output luminance of 10000 nit). It may be even more useful to include information on exactly how all luma or color values along the codable color range of the M-bit representation are distributed over the codable range of the N-bit signal, e.g. to utilize the bits of the new N-bit representation as well as possible and to encode as accurately as possible the textures of the various important objects along the luminance range of the images encoded in the M-bit representation, for example by co-encoding the mapping functions.
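The coded-white indication discussed above can be sketched as follows: the indicator carries the luminance of the coded white (e.g. 5000 or 50000 nit), and the receiving display maps pixel codes to output luminance according to its own peak. The hard-clip curve below is an assumption for illustration only; the text merely requires that brighter coded whites map to brighter output, saturating at the panel maximum, and a real display would typically use a more sophisticated mapping (as in the 5000 nit → 6000 nit example above).

```python
def display_luminance(code: int, coded_white_nit: float,
                      panel_max_nit: float, code_max: int = 1023) -> float:
    """Map a pixel code (0..code_max) to a display output luminance in nit.

    coded_white_nit is the luminance signaled by the indicator for the
    maximum code value; panel_max_nit is what the screen can output.
    """
    scene_nit = (code / code_max) * coded_white_nit  # luminance the code stands for
    # Assumed simple policy: render linearly up to the panel's capability,
    # then saturate at the panel's absolute white.
    return min(scene_nit, panel_max_nit)
```

For a 10000 nit panel, a coded white of 5000 nit is rendered as-is, while a coded white of 50000 nit saturates at the panel maximum.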
In fact, all of this may vary dynamically between different scenes of a movie, for example switching between a well-lit, flat indoor scene, which may best be represented with LDR encodings, and an outdoor scene with spectacularly bright fireworks, which may better be represented with a variant tuned towards HDR, with different image statistics resulting in different N-bit encoding statistics. [017] According to an optional aspect of the invention, the first color representation is different from the second color representation. [018] This can provide improved performance in many embodiments and can, in many scenarios, specifically allow highly efficient communication of HDR image data. The apparatus may adapt the HDR image data to specifically match the requirements, characteristics and/or preferences of the specific distribution medium. [019] In accordance with an optional aspect of the invention, the apparatus further comprises a transformation unit for transforming the high dynamic range pixel values from the first color representation to the second color representation. [020] This can provide improved performance in many embodiments and can, in many scenarios, specifically enable highly efficient communication of HDR image data. The apparatus may adapt the HDR image data to specifically match the requirements, characteristics and/or preferences of the specific distribution medium. [021] According to an optional aspect of the invention, the transformation comprises a compression of the M-bit input terms into N-bit terms, where M is greater than N. [022] A more efficient image signal for distributing HDR content can be achieved in many embodiments. A compression that allows more efficient distribution may, for example, apply non-linear transformations to transform, for example, a linear M-bit-term color representation into a non-linear N-bit-term color representation.
[023] According to a further aspect of the invention, the compression comprises using a different quantization scheme for pixel values according to the second color representation than for pixel values according to the first color representation. [024] A more efficient image signal for distributing HDR content can be achieved in many embodiments. The quantization scheme for the second color representation may, for example, allow the dynamic range to be covered by fewer quantization levels and may allow N to be less than M. The quantization scheme for the second color representation may, for example, be a non-uniform quantization of high dynamic range color and/or luminance component values. [025] According to a further aspect of the invention, the first color representation is the same as the second color representation. [026] This can allow efficient representation and/or low complexity and/or facilitated operation in many scenarios. In particular, it can allow high dynamic range images to be handled efficiently with low complexity and low computational resource usage. [027] According to a further aspect of the invention, the indicator comprises an indication of a display luminance associated with the second color representation. [028] The image signal may include an indication of how the provided pixel values nominally correlate to intended luminances. The approach may, for example, allow a screen receiving the image signal to adapt the rendering of pixel values to match the actual screen characteristics. For example, transformations can be applied to provide accurate and appropriate conversions from the nominal or reference display associated with the second color representation to the actual display used for rendering. [029] The indicator can specifically provide an indication of a reference luminance corresponding to a reference pixel value.
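The non-uniform quantization mentioned in paragraph [024] can be sketched with a logarithmic re-quantization of a linear M-bit luminance into an N-bit luma, so that dark values keep fine steps while bright values share coarser ones. The bit depths (M = 16, N = 10) and the specific log curve are illustrative assumptions, not a scheme mandated by the text.

```python
import math

M, N = 16, 10          # assumed linear input depth and compressed luma depth
SPAN = 1000.0          # assumed contrast span shaped by the log curve

def encode_luma(linear: int) -> int:
    """Compress an M-bit linear luminance into an N-bit log-spaced luma."""
    x = linear / ((1 << M) - 1)                   # normalize to [0, 1]
    y = math.log1p(SPAN * x) / math.log1p(SPAN)   # non-uniform (log) mapping
    return round(y * ((1 << N) - 1))

def decode_luma(luma: int) -> int:
    """Approximate inverse of encode_luma (exact at the range endpoints)."""
    y = luma / ((1 << N) - 1)
    x = math.expm1(y * math.log1p(SPAN)) / SPAN
    return round(x * ((1 << M) - 1))
```

The round trip is only approximate in between the endpoints, which is precisely the compression: 16-bit linear values share 10-bit codes, with the quantization error concentrated in bright regions where vision is less step-sensitive.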
For example, the luminance corresponding to the pixel value representing the highest luminance of the second color representation can be indicated by the indicator. [030] The approach can allow any HDR space to be encoded while still allowing it to be displayed on any screen. For example, an HDR image may be encoded to match a dynamic range with a brightest luminance of 50,000 nit. However, when rendering this signal on a 1000 nit screen, it is desirable to provide an intelligent mapping between the encoded dynamic range and the rendering dynamic range. This transformation can be enhanced and/or facilitated by the indicator indicating a display luminance associated with the second color representation. [031] According to a further aspect of the invention, the indicator comprises an indication of the second color representation. [032] This may improve performance and/or facilitate rendering. In particular, it can allow a device receiving the image signal to optimize its processing for the specific color representation used. Color representations can specify both how data values are packed (e.g. first a luma, then a hue as a 3-bit component, then a saturation, according to some bit-allocation mapping of successive terms) and what they mean (which primaries, etc.). [033] In accordance with a further aspect of the invention, the first color representation employs a separate color value for each color component of the first color representation, and the second color representation employs a set of color values for the color components of the second color representation together with a common exponential factor. [034] This can provide a particularly efficient representation. The set of color values for each color component of the second color representation can correspond to a linear or non-linear (such as logarithmic) representation of the color component luminance values.
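A minimal sketch of the "common exponential factor" representation of paragraph [033]: each pixel stores one mantissa per color component plus a single shared exponent, similar in spirit to the well-known RGBE format. The 8-bit mantissas and the exponent bias of 128 below are assumptions for illustration, not parameters fixed by the text.

```python
import math

def encode_shared_exponent(r: float, g: float, b: float) -> tuple[int, int, int, int]:
    """Encode three linear components as (rm, gm, bm, e) with a shared exponent."""
    m = max(r, g, b)
    if m < 1e-32:
        return 0, 0, 0, 0                 # all-zero pixel gets a zero exponent
    e = math.floor(math.log2(m)) + 1      # shared exponent covers the largest component
    scale = 256.0 / (2.0 ** e)            # 8-bit mantissas assumed
    return (min(int(r * scale), 255),
            min(int(g * scale), 255),
            min(int(b * scale), 255),
            e + 128)                      # assumed exponent bias of 128

def decode_shared_exponent(rm: int, gm: int, bm: int, e: int) -> tuple[float, float, float]:
    """Recover approximate linear components from the shared-exponent encoding."""
    if e == 0:
        return 0.0, 0.0, 0.0
    scale = (2.0 ** (e - 128)) / 256.0
    return rm * scale, gm * scale, bm * scale
```

With 3×8-bit mantissas and one 8-bit exponent, a pixel fits a 32-bit term while covering a far larger luminance range than 8-bit-per-channel LDR, which is the efficiency argument of paragraph [034].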
[035] According to a further aspect of the invention, the image signal comprises a segment for pixel image data, and the first generator is arranged to alternatively include either low dynamic range pixel values or the high dynamic range pixel values according to the second color representation in the segment, and the indicator is arranged to indicate whether the segment comprises low dynamic range color values or high dynamic range color values. [036] This can provide a particularly advantageous representation. In many scenarios, it can provide improved backwards compatibility and/or facilitate the introduction of HDR into existing systems or standards. The approach may, in particular, allow existing video distribution approaches for low dynamic range image distribution to be readily adapted to high dynamic range image distribution. [037] The segment may, for example, be a segment reserved for communication of color-enhanced data. For example, an image signal standard may allow image data to be communicated according to a standard color representation and according to an enhanced color representation, where the enhanced color representation allows improved chromaticity representation over the standard color representation (for example, a more accurate chromaticity quantization or a wider color gamut). Typically, the enhanced color representation may use more bits than the standard color representation. The approach may allow a segment reserved for the enhanced color representation to be used for communication of high dynamic range data.
[038] In accordance with a further aspect of the invention, the second generator is arranged to further include a second indicator in the image signal, the second indicator indicating that the segment is used for low dynamic range pixel values both when the segment comprises low dynamic range pixel values and when the segment comprises high dynamic range pixel values. [039] This can provide a particularly advantageous representation. In many scenarios, it can provide improved backwards compatibility and/or facilitate the introduction of HDR into existing systems or standards. The approach may, in particular, allow existing video distribution approaches for low dynamic range image distribution to be readily adapted to enable high dynamic range image distribution. [040] The use of the second indicator, which may indicate that the segment carries low dynamic range data even when it contains high dynamic range data, can be used to ensure that processing or distribution based on this indicator is the same as for low dynamic range data. This can avoid conflicts and, in particular, can allow functionality that is not capable of processing high dynamic range data or the first indicator to still process the signal. Other functionality can then exploit the first indicator to process the pixel values as high dynamic range data. For example, in some embodiments, only the rendering screen may use the first indicator to process the pixel data, whereas intervening distribution or storage functionality is based only on the second indicator and therefore need not be capable of processing the first indicator or, indeed, high dynamic range pixel values. The second indicator can be an existing standardized indicator, with the first indicator being a new indicator introduced into an existing standard.
[041] According to a further aspect of the invention, a number K of bits reserved for each pixel in the segment is greater than N. [042] This may allow improved and/or facilitated operation in many scenarios. In some embodiments, the K−N surplus bits may be used to communicate other data, such as chromaticity enhancement data. [043] According to a further aspect of the invention, the image encoding signal is in accordance with an HDMI standard. [044] The invention can provide an image signal particularly advantageous for distribution in accordance with HDMI™ (High Definition Multimedia Interface) standards. [045] According to a further aspect of the invention, the first generator is arranged to include the high dynamic range pixel values in a Deep Color data segment. [046] This can provide a particularly advantageous approach and can, in particular, allow improved backwards compatibility. [047] According to a further aspect of the invention, the second generator is arranged to include the indicator in an Auxiliary Video Information InfoFrame. [048] This can provide a particularly advantageous approach and can, in particular, allow improved backwards compatibility. [049] According to a further aspect of the invention, the image encoding signal is in accordance with a DisplayPort standard. [050] The invention can provide an image signal particularly advantageous for distribution in accordance with DisplayPort™ standards.
[051] In accordance with an aspect of the invention, there is provided a method of generating an image signal in which pixels are encoded in N-bit terms encoding at least one luma per pixel, the method comprising: obtaining high dynamic range pixel values according to a first color representation in M-bit input terms; including the high dynamic range pixel values in the image signal in N-bit terms, in accordance with a second color representation; and including in the image signal an indicator that high dynamic range pixel values are encoded. [052] According to one aspect of the invention, there is provided an apparatus for processing an image signal, the apparatus comprising: a receiver for receiving the image signal, a data segment of the image signal comprising one of high dynamic range pixel values in N-bit terms according to a first color representation and low dynamic range pixel values according to a second color representation, and for receiving an indicator indicative of whether the data segment comprises the high dynamic range pixel values or the low dynamic range pixel values; an extractor for extracting data from the data segment; and a processor arranged to process the data of the data segment as high dynamic range pixel values or as low dynamic range pixel values, depending on the indicator. [053] According to a further aspect of the invention, the image signal is in accordance with an HDMI standard, and the apparatus further comprises means for transmitting an indication of the capability to process high dynamic range pixel values in an HDMI vendor-specific data block. [054] This can allow particularly advantageous image signal distribution. In particular, it can provide improved backwards compatibility and/or facilitate the introduction of HDR information into HDMI systems.
[055] According to an aspect of the invention, there is provided a method of processing an image signal, the method comprising: receiving the image signal, a data segment of the image signal comprising one of high dynamic range pixel values in N-bit terms according to a first color representation and low dynamic range pixel values according to a second color representation; receiving an indicator indicative of whether the data segment comprises the high dynamic range pixel values or the low dynamic range pixel values; extracting data from the data segment; and processing the data from the data segment as either high dynamic range pixel values or low dynamic range pixel values, depending on the indicator. [056] In accordance with one aspect of the invention, an image signal is provided in which pixels are encoded in N-bit terms encoding at least one luma per pixel, the image signal comprising: the high dynamic range pixel values in the image signal in N-bit terms, according to a color representation; and an indicator that high dynamic range pixel values are encoded. [057] These and other aspects, features and advantages of the invention will be apparent from, and elucidated with reference to, the embodiment(s) described hereinafter.
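The processing method of paragraph [055] can be sketched as a receiver that extracts the pixel data from the data segment and interprets it as HDR or LDR depending on the indicator. The `ImageSignal` container, the placeholder extraction, and the two decode stubs below are hypothetical illustrations, not a defined wire format.

```python
from dataclasses import dataclass

@dataclass
class ImageSignal:
    indicator_hdr: bool   # True: the segment holds high dynamic range pixel values
    segment: bytes        # pixel data, N-bit terms packed into bytes

def extract(segment: bytes) -> bytes:
    """Placeholder extractor: real extraction depends on the term packing."""
    return segment

def process_as_hdr(data: bytes) -> str:
    return f"HDR: {len(data)} bytes interpreted with the first color representation"

def process_as_ldr(data: bytes) -> str:
    return f"LDR: {len(data)} bytes interpreted with the second color representation"

def process_signal(sig: ImageSignal) -> str:
    """Branch on the indicator, as required by the processing method."""
    data = extract(sig.segment)
    if sig.indicator_hdr:
        return process_as_hdr(data)
    return process_as_ldr(data)
```

The key point is that the same segment bytes flow through either branch; only the indicator decides their interpretation.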
BRIEF DESCRIPTION OF THE DRAWINGS [058] The embodiments of the invention will be described, by way of example only, with reference to the drawings, in which [059] FIGURE 1 is an illustration of a distribution path for audiovisual content; [060] FIGURE 2 is an illustration of an apparatus for generating an image signal, in accordance with some embodiments of the invention; [061] FIGURE 3 is an illustration of an apparatus for generating an image signal, in accordance with some embodiments of the invention; [062] FIGURE 4 is an illustration of an apparatus for processing an image signal, in accordance with some embodiments of the invention; [063] FIGURE 5 illustrates examples of encoding pixel values; [064] FIGURE 6 illustrates an example of a system to generate audiovisual content; and [065] FIGURE 7 illustrates an example of a system for processing audiovisual content. DETAILED DESCRIPTION OF SOME EMBODIMENTS OF THE INVENTION [066] FIGURE 1 illustrates an example of an audiovisual distribution path. In the example, a content providing apparatus 101 generates an audiovisual content signal for an audiovisual content item, such as a movie, a television program, etc. The content provider apparatus 101 can specifically encode the audiovisual content according to a suitable encoding format and color representation. In particular, the content provider apparatus 101 can encode the images of a video sequence of the audiovisual content item, according to a suitable representation, such as, for example, YCrCb. The content provider apparatus 101 can be considered to represent a production and distribution generation that creates and broadcasts the content. [067] The audiovisual content signal is then distributed to a content processing device 103 via a distribution path 105. The content processing device 103 may, for example, be a set-top box decoder residing with a specific consumer of the content item. 
[068] The audiovisual content is encoded and distributed from the content provider apparatus 101 through a medium, which may, for example, consist of commercial media (DVD or BD, etc.), the Internet or broadcast. It then reaches a source device in the form of the content processing device 103, which comprises functionality for decoding and playing the content. [069] It will be appreciated that the distribution path 105 may be any distribution path, by any medium, and using any suitable communication standard. Also, the distribution path need not be real-time, but may include permanent or temporary storage. For example, the distribution path may include the Internet, satellite or terrestrial broadcasting, etc., or storage on physically distributed media such as DVDs, Blu-ray Disc™ or a memory card. Likewise, the content processing device 103 may be any suitable device, such as a Blu-ray™ player, a satellite or terrestrial television receiver, etc. [070] The content processing device 103 is coupled to a screen 107 via a communication path 109. The content processing device 103 generates a display signal comprising an audiovisual signal representing the audiovisual content item. The display signal may specifically be the same as the audiovisual content signal. Thus, the source device sends the decoded content to a sink device, which may be a TV or another device that converts the digital signals into a physical representation. [071] In some embodiments, the data representing the images of the audiovisual content are the same for the audiovisual content signal and for the display signal. In this example, the display 107 may comprise functionality for image processing, including, for example, generating images with high dynamic range.
However, it will be appreciated that, in some embodiments, the content processing device 103 may perform, for example, image enhancement or signal processing algorithms on the data, and may specifically decode and re-encode the (processed) audiovisual signal. The re-encoding may specifically be to a different encoding or representation format than that of the audiovisual content signal. [072] The system of FIGURE 1 is arranged to provide High Dynamic Range (HDR) video information. Also, in order to provide, for example, improved backwards compatibility, it may additionally, in some scenarios, provide Low Dynamic Range (LDR) information that allows an LDR image to be displayed. Specifically, the system is able to communicate/distribute image signals relating to both LDR and HDR images. [073] The approach described below can be applied to one or both of the link 105 from the content providing apparatus 101 to the content processing device 103 and the link 109 from the content processing device 103 to the screen 107. The approach can be applied differently on the two links, for example using different color representations or encoding standards. However, the following description will focus, for brevity and clarity, on applying the approach to an interface between an audiovisual set-top box decoder and a corresponding screen. Thus, the description will focus on an application to the communication path 109 between the content processing device 103 and the screen 107 of FIGURE 1. [074] Conventional screens typically use an LDR representation. Typically, such representations are provided by a three-component, 8-bit representation relating to specified primaries. For example, an RGB color representation can be provided by three 8-bit samples, referring to the Red, Green and Blue primaries, respectively. Another representation uses one luma component and two chroma components (such as YCrCb). These LDR representations correspond to a given brightness or luma range.
[075] However, increasingly, image capture devices are provided that can capture larger dynamic variations. For example, cameras typically provide 12-bit, 14-bit, or even 16-bit variations. So, compared to a standard, conventional, 8-bit LDR camera, an HDR camera can reliably (linearly) capture 12-bit, 14-bit (or more) ranging from a brighter white to a certain black. Thus, HDR can correspond to an increasing number of bits for the data samples corresponding to LDR, thus allowing a greater dynamic variation to be represented. [076] HDR specifically allows significantly brighter images (or image areas) to be displayed. Of course, an HDR image can provide substantially brighter white than can be provided by the corresponding LDR image. Certainly, an HDR image can allow at least four times whiter white than the LDR image. Brightness can specifically be measured against the darkest black that can be represented, or it can be measured against a certain level of gray or black. [077] The LDR image can specifically match specific display parameters such as a fixed bit resolution related to a specific set of primaries and/or a specific white point. For example, 8 bits can be provided to a certain set of RGB primaries and, for example, a white point of 500 Cd/m2. The HDR image is an image that includes data that must be represented above these restrictions. In particular, a glow can be more than four times brighter than the white point (eg 2000 Cd/m2) or more. [078] High dynamic range pixel values have a luminance contrast range (brightest luminance in the pixel set, divided by the darkest luminance) that is (much) greater than a range that can be reliably displayed on screens standardized in the NTSC and MPEG-2 era (with their typical RGB primaries, and a D65 white for the maximum trigger level [255, 255, 255] a reference brightness of, say, 500 nit or below). 
Typically, such an 8-bit reference display suffices to display all gray values between approximately 500 nit and approximately 0.5 nit (i.e. with a contrast range of 1000:1 or below) in visually small steps, whereas HDR images are encoded with a higher-bit word, e.g. 10 bits (and are also captured by a camera and DAC of much higher precision, e.g. 14 bits). In particular, HDR images typically contain many pixel values (of bright image objects) above the scene white. In particular, several pixels may be brighter than two times the scene white. This scene white can typically be equated with the white of the NTSC/MPEG-2 reference display. [079] The number of bits X used for HDR images may typically be greater than or equal to the number of bits Y used for LDR images (X may typically be, for example, 10, 12 or 14 bits (per color channel, if several channels are used), and Y may be, for example, 8 or 10). A transform/mapping may be required to fit pixels into a smaller range, for example a compressive scaling. Typically, a non-linear transformation may be involved; for example, logarithmic encoding may encode (as lumas) a much larger luminance range in an X-bit word than linear encoding, so that the luminance difference steps from one value to the next are then no longer equidistant, but neither does the human visual system require them to be. [080] FIGURE 2 illustrates an apparatus for generating an image signal. In the image signal, pixels are encoded in N-bit words, with at least one luma being encoded per pixel. The N-bit words may comprise a plurality of individual components. For example, an N-bit word representing a pixel may be divided into several sections, each containing a number of bits that represent a property of the pixel. For example, the N-bit words may be divided into several sections, each containing a pixel value component corresponding to a primary color.
For example, an N-bit word may provide an RGB pixel value by one section including the bits that provide the R pixel value, another section including the bits that provide the G pixel value, and a third section including the bits that provide the B pixel value. [081] The N-bit words representing HDR pixel values are provided in accordance with a color representation. It will be appreciated that any suitable color representation that allows HDR pixels to be represented may be used, including, for example, an RGB or YCrCb color representation. It will also be appreciated that multi-primary color representations using more than three primaries may be used. [082] It will be appreciated that the apparatus may be used at any suitable location in the distribution path from the generation of image content to the rendering of image content. However, the following description will focus on an embodiment where the apparatus is implemented as part of the content processing device 103 of FIGURE 1. [083] The apparatus comprises a receiver 201 that receives high dynamic range pixel values in accordance with a first color representation, as M-bit input words. The receiver 201 may specifically receive an image signal comprising pixel values for an HDR image. The signal may be received from any suitable external or internal source, but in the specific example, the signal is received by the content processing device 103 from the content provider apparatus 101. [084] Similarly to the N-bit words generated by the apparatus of FIGURE 2, the received M-bit input words may also comprise a plurality of individual components. For example, an M-bit word representing a pixel may be provided in several sections, each containing a number of bits that represent a property of the pixel. For example, the M-bit input words may be divided into several sections, each containing a pixel value component corresponding to a primary color.
For example, an M-bit word may provide an RGB pixel value by one section including the bits that provide the R pixel value, another section including the bits that provide the G pixel value, and a third section including the bits that provide the B pixel value. [085] Likewise, the M-bit input words that provide HDR pixel values are provided in accordance with a first color representation. It will be appreciated that any suitable color representation that allows HDR pixels to be represented may be used, including, for example, an RGB or YCrCb color representation. It will also be appreciated that multi-primary color representations using more than three primaries may be used. For brevity and clarity, the following description will focus on an input signal comprising HDR pixel values as M-bit input words in accordance with an RGB color representation. [086] In some embodiments, the color representation of the M-bit (input) words (the first color representation) and the color representation of the N-bit (output) words (the second color representation) may be the same and, in fact, N may be equal to M. Thus, in some embodiments, the same color representation may be used for the (output) image signal as for the received (input) image signal. [087] In the example of FIGURE 2, the receiver 201 is coupled to a first generator 203 which is arranged to include the N-bit words in the image signal. In the specific example, this is done by generating data packets that include the N-bit words for the image. Further, in the example, the color representations and word lengths for the input and output signals are the same, and thus the first generator 203 can directly include the received M-bit input words that represent the HDR pixel values in the output image signal, for example by directly generating data packets or suitable segments comprising the M-bit values.
[088] The receiver 201 is further coupled to a second generator 205 which is arranged to generate and include in the image signal an indicator which indicates that HDR pixel values are encoded in the image signal. Thus, an indicator is provided as part of the image signal, indicating that the signal comprises HDR values. The indicator may, for example, be included in the image signal by being included in a data message or data packet distributed along with the data messages or data packets comprising the pixel value data. [089] The first and second generators 203, 205 are coupled to an output unit 207 which is arranged to output the image signal. In the specific example, the output unit 207 may simply transmit the messages or data packets containing the pixel value data and the indicator. [090] Thus, in the specific example, the image signal is a composite or divided signal consisting of several independently communicated parts. In the specific example, the image signal comprises a plurality of different types of data packets. However, in other embodiments, the image signal may be provided as a single combined data stream comprising both the pixel value data and the indicator. In such examples, the data provided by the first and second generators 203, 205 may be combined into a single data stream or bitstream by the output unit 207. Specifically, the output unit 207 may comprise a multiplexer for multiplexing the data into a single data stream or file. [091] The apparatus of FIGURE 2 generates an image signal that can not only contain an efficient representation of HDR image data, but can also provide flexible HDR distribution and communication. In particular, it may provide improved backwards compatibility and may, for example, allow or facilitate the introduction of HDR images into systems and standards not originally designed for HDR images.
For example, it may allow suitably capable equipment (such as displays) to process the image signal as appropriate for HDR data, so that conditional processing of the received pixel values based on the presence or absence of an HDR indication may be achieved. [092] In the example of FIGURE 2, the color representation of the input signal is the same as the color representation of the output signal and, indeed, the received HDR samples are directly included in the image signal. However, in many applications, the first color representation will be different from the second color representation. [093] FIGURE 3 illustrates the apparatus of FIGURE 2, modified to include a transform processor 301 between the receiver 201 and the first generator 203. The transform processor 301 is arranged to transform the high dynamic range pixel values from the first color representation to the second color representation. [094] The transform processor 301 may specifically be arranged to perform a compression of the representation of the HDR pixel values so that the number of bits required is reduced. Thus, in many scenarios, the transform unit is arranged to transform the M-bit input words into N-bit output words, where M is greater than N. The transform processor 301 can thus typically be arranged to generate a more compact representation of the HDR pixel values, thereby allowing a reduced data rate. [095] The transformation may specifically include a non-linear representation of the dynamic range. For example, the input signal may be received as samples according to a linear, 16-bit RGB color representation. The input words may thus be 48-bit input words. Such a representation tends to provide a very accurate representation of the dynamic range and to keep tonal shifts etc. within acceptable limits, even for relatively large dynamic ranges.
However, the need for 48 bits per pixel results in a relatively high data rate that is impractical or undesirable for many applications. [096] The transform processor 301 may accordingly process the 48-bit words to provide a more efficient representation. This approach can typically exploit the perceptual characteristics of the human visual system. A characteristic of human vision is that the sensitivity to brightness variations tends to be non-linear. In fact, the increase in luminance that is required for a human to perceive an increase (or decrease) in brightness grows with increasing luminance. Accordingly, larger steps can be used at higher luminances than at lower luminances, and the transform processor 301 can thus, in many embodiments, convert the linear M-bit representations into a non-linear N-bit representation. In many scenarios, a suitable transformation can be achieved by applying a logarithmic function to the pixel values. [097] The transformation may, in some embodiments, be implemented as, or include, a change in the quantization scheme used for the pixel values. A quantization scheme may provide the relationship between the actual pixel values and the corresponding light radiated from the display (or from a nominal display). In particular, the quantization scheme may provide the correspondence between bit values and corresponding values within a full dynamic range. [098] For example, a given display range can be normalized to the range 0-1, where 0 corresponds to minimum light being radiated and 1 corresponds to maximum light being radiated. A simple, linear, uniform quantization scheme can simply divide the 0-1 range into equally sized quantization intervals. For example, for a 12-bit representation, the 0-1 range is divided into 4096 equal steps. [099] The transform processor 301 can change the input quantization scheme applied to the components of the M-bit words to a different quantization scheme that is applied to the output N-bit words.
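Such a quantization-scheme change can be sketched as follows. The power-law curve used here is a stand-in assumption for the perceptual (e.g. logarithmic) mapping described in the text, and the bit depths (16-bit linear in, 10-bit non-linear out) merely follow the examples used in this description:

```python
def requantize(code16: int, gamma: float = 2.4) -> int:
    """Map a 16-bit linear code (0..65535) onto a 10-bit non-linear code
    (0..1023) so that quantization steps grow with brightness."""
    x = code16 / 65535.0          # normalize to the 0..1 display range
    y = x ** (1.0 / gamma)        # non-linear: more codes spent on dark values
    return round(y * 1023)

def dequantize(code10: int, gamma: float = 2.4) -> float:
    """Inverse mapping, back to a normalized linear light value."""
    return (code10 / 1023.0) ** gamma
```

With this mapping, the linear light interval represented by one output code is larger near white than near black, which is exactly the non-uniform step-size property described above.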
[0100] For example, an input quantization of 65536 steps for each color component can be converted into 1024 steps. However, instead of merely using a corresponding linear quantization, the transform processor 301 can apply a non-linear quantization scheme in which, specifically, the size of the quantization steps increases with increasing bit values (corresponding to increased light output). The non-uniform, non-linear representation reflects human perception and can thus, in many cases, allow a reduced number of bits to provide a perceived image quality equal to that achieved with a larger number of bits under uniform, linear quantization. [0101] Changing the quantization scheme can, in principle, be performed by dequantizing the M-bit input words, followed by quantization into N-bit words. However, in many scenarios, the transform processor 301 can simply convert the words by applying suitable bit operations directly to the M input bits and, in particular, by providing a non-linear mapping from the 16 bits of each input color component to the 10 bits of the corresponding output color component. [0102] In some embodiments, the transform processor 301 may individually and separately transform each component of the M-bit input words into a corresponding component of the N-bit words. For example, an M-bit word may contain a pixel R sample, a pixel G sample and a pixel B sample for an RGB color representation, and this may be converted into a pixel R sample, a pixel G sample and a pixel B sample for an RGB color representation of the N-bit word, where the R, G and B samples are allocated to different bits of the N-bit words. [0103] However, particularly advantageous performance can often be achieved by N-bit words comprising both individual sections for each component and a common section representing a common component for the individual components of the N-bit word.
[0104] Specifically, a separate color value can be provided for each color component of the color representation of the M-bit input words. Thus, the M-bit input words may be provided simply as separate color samples, as, for example, in an RGB representation. However, the color representation of the N-bit words may include a separate value for each color component (such as for the R, G and B components), but may, in addition, provide a common exponential factor for all color components. Thus, the N-bit representation may comprise four sections, with three sections providing an individual sample value for the individual color components and a fourth section providing a common exponential factor for all color values. As a specific example, the transform processor 301 can convert from an M-bit RGB representation to an N-bit RGBE representation in order to provide a more efficient representation of the HDR pixel values. It will be appreciated that, in some embodiments, the transform processor 301 may be arranged to perform more complex processing and may, for example, map from one color representation to another taking into account image characteristics, display characteristics, etc. [0105] For example, the apparatus may be arranged to select from a range of color representations for encoding the HDR samples, and may select the one that is most suitable for the current image data. For example, for a sequence of images having a very large dynamic range, a non-linear (e.g. logarithmic) representation can be used, whereas for a sequence of images having a smaller dynamic range, a linear representation can be used. The apparatus may, in such embodiments, further be arranged to include an indication of the selected encoding in the image signal (e.g. an indication of a tone mapping function or gamma function etc.). Thus, the indicator can indicate the specific color representation used for the N-bit words of the image signal.
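The RGB-to-RGBE conversion mentioned above can be sketched as follows. The layout details (8-bit mantissas, one shared 8-bit exponent with bias 128, mantissas stored as value × 256 / 2^e) follow the common Ward RGBE convention, which is an assumption here rather than something mandated by the text:

```python
import math

def rgb_to_rgbe(r: float, g: float, b: float) -> tuple:
    """Encode linear RGB floats as (R8, G8, B8, E8): an 8-bit mantissa per
    component plus one shared 8-bit exponent with bias 128."""
    m = max(r, g, b)
    if m < 1e-38:                      # effectively black: all-zero code
        return (0, 0, 0, 0)
    e = math.frexp(m)[1]               # m = f * 2**e with 0.5 <= f < 1
    scale = 256.0 / 2.0 ** e           # mantissa = value * 256 / 2**e
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_rgb(r8: int, g8: int, b8: int, e8: int) -> tuple:
    """Decode back to linear floats; the common scaling factor is
    2**(e8 - 128), applied to mantissas interpreted as fractions of 256."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = 2.0 ** (e8 - 128) / 256.0
    return (r8 * f, g8 * f, b8 * f)
```

For example, the linear triplet (1.0, 0.5, 0.25) encodes to mantissas (128, 64, 32) with exponent 129, and decodes back exactly; the shared exponent is what lets 32 bits cover a far larger dynamic range than a 24-bit linear RGB word.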
[0106] It will be appreciated that any suitable mapping between the M-bit input words and the N-bit words (and thus between the first and second color representations) can be used without departing from the invention. [0107] In some embodiments, the indicator may comprise an indication of a display luminance associated with the color representation of the N-bit words. For example, an indication of the luminance covered by the range of the color encoding can be provided. [0108] As a specific example, the second color representation can be associated with a reference or nominal display. The reference display may correspond to a given maximum luminance, and the indication may, for example, indicate that the brightest code (e.g. 1023 for a 10-bit representation) is intended to correspond to a luminance of, say, 50,000 nits. This makes it possible, for example, to include a differently graded representation, which a smart receiving device can take into account. [0109] This indication can, at a receiver, be used to adapt the received HDR pixel samples to the luminance of the specific rendering display. In fact, in many cases, it will be more advantageous to map the HDR pixel samples to drive values for the display taking into account the absolute luminance range that can be provided. [0110] For example, if HDR pixel values are merely provided as relative values within a normalized dynamic range (i.e. 0 to 1), the display will typically render the image using the corresponding fractional luminances. For example, a pixel value of 0.5 will be rendered as half the maximum light output. However, for HDR content and/or HDR displays, such an approach may not be ideal. For example, an image of a beach with the sun in the sky and some dark areas can use the full dynamic range to provide a very bright (luminous) sun when displayed on a 50,000 nit display.
Due to the large dynamic range, this is possible while still providing a bright (but darker) beach and sky, and while still providing detail in the shadow areas. However, if the same image is displayed on a 10,000 nit display, a simple linear scaling will result in the sky and beach being rendered at much lower luminances, making them appear relatively dark and dull. Also, the shadow detail may be compressed such that many details are not noticeable (or even representable). Instead, it would be advantageous for the display to clip the sun quite hard to lower luminance values, while maintaining, or only moderately reducing, the luminance of the sky and the beach. Thus, an adaptive, non-linear mapping can be performed. However, this approach requires the display to consider not only its own characteristics, and especially its luminance range, but also to know the actual absolute luminances that the received HDR pixel values are meant to correspond to. [0111] The approach may, for example, allow an encoding of the HDR image to be performed in accordance with any suitable HDR space, while allowing the image to be rendered on any display, for example one with a 1000 nit output, one with a 20,000 nit output, etc. This can be achieved by performing color gamut mapping, and this color gamut mapping can specifically be in response to the absolute luminance difference between the encoding reference and the actual display on which the image is rendered. For example, if one merely mapped, say, an HDR range of 50,000 nit onto the, say, 1000 nit available on the particular display (with everything linearly compressed), most colors would be rendered too dark. A better approach might be, for example, to map luminances above, say, 5000 nits to be very close to the display white (e.g. 950 nit).
For example, the range from 5000 nits to 50,000 nits can be mapped to 950 nits to 1000 nits; 1000 nits to 5000 nits can be mapped to 850 nits to 950 nits; 750 nits to 1000 nits to 750 nits to 850 nits; and the remaining range of 0-750 nits can simply be mapped to itself. [0112] In many embodiments, the image signal may be generated to include a data segment in which pixel image data is provided. For example, the image signal may conform to a standard that specifies particular data segments in which pixel values are included. In some embodiments, these segments may be used for HDR pixel values or may be used for LDR pixel values. Thus, at some times the data segment may comprise LDR pixel values, and at other times the data segment may contain HDR pixel values. In such embodiments, the indicator can be used to indicate the type of data that is included in the data segment. Thus, the indicator can be indicative of whether the data segment includes HDR data or LDR data. This approach allows a very flexible system and, in particular, can facilitate the introduction of HDR data into existing systems and standards, as the existing, defined LDR data segments can be reused for HDR data, with only an indicator needing to be introduced. [0113] FIGURE 4 illustrates an example of a sink for processing a signal provided by an apparatus as previously described. In the specific example, the sink is a display arranged to render the image of the image signal. The sink may specifically be the display 107 of FIGURE 1. [0114] The display 107 comprises a receiver 401 that receives the image signal. The image signal comprises a data segment that may contain either high dynamic range pixel values in N-bit words according to one color representation, or low dynamic range pixel values (according to another color representation).
The image signal further comprises an indicator which is indicative of whether the data segment comprises high dynamic range pixel values or low dynamic range pixel values. [0115] The receiver 401 is coupled to an extractor 403 which is arranged to extract the data from the data segment. The extractor 403 thus retrieves the pixel sample data from the image signal. [0116] The extractor 403 is coupled to a processor for processing the pixel sample data. In the example, the processor is a video unit 405 which is further coupled to a display panel 407 and to the receiver 401. [0117] The video unit 405 receives the pixel sample data from the extractor 403 and the indicator from the receiver 401, and proceeds to generate a video drive signal for the display panel 407. [0118] The processing of the video unit 405 depends on whether the indicator indicates that the pixel data is for an HDR or an LDR image. For example, if the display is an LDR display, it can directly generate drive signals corresponding to the pixel values, provided the indicator reflects that the pixel values are already LDR values. However, if the indicator reflects that the received pixel values are, in fact, HDR pixel values, the video unit 405 can proceed to perform color gamut mapping and other HDR-to-LDR conversions. For example, a non-linear scaling can be applied to the HDR pixel values (for example, corresponding to a log operation and a clipping operation). This conversion can furthermore take the dynamic range associated with the received HDR data into account when adapting the conversion. [0119] Conversely, if the display is an HDR display, it can directly use the pixel values when the indicator indicates that the pixel data is HDR data, and it can perform a color gamut conversion (including a luminance boost) when the indicator indicates that the pixel data is LDR data.
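The indicator-driven behavior of the video unit 405 can be sketched by combining it with the piecewise luminance mapping given as an example in paragraph [0111]. The break points come from the text; linear interpolation within each segment, and the omission of the LDR-to-HDR boost path, are simplifying assumptions:

```python
def map_luminance(l_in: float) -> float:
    """Piecewise mapping of a 0..50,000 nit coded range onto a 1000 nit
    display, using the break points of paragraph [0111]."""
    if l_in <= 750:
        return l_in                                    # 0..750: identity
    if l_in <= 1000:
        return 750 + (l_in - 750) * 100.0 / 250.0      # 750..1000 -> 750..850
    if l_in <= 5000:
        return 850 + (l_in - 1000) * 100.0 / 4000.0    # 1000..5000 -> 850..950
    return 950 + (l_in - 5000) * 50.0 / 45000.0        # 5000..50,000 -> 950..1000

def process(luminance: float, indicator_hdr: bool, display_hdr: bool) -> float:
    """Indicator-dependent processing (sketch): convert only when the signal
    type and the display type differ; the LDR-on-HDR boost is omitted here."""
    if indicator_hdr and not display_hdr:
        return map_luminance(luminance)   # HDR signal on a lower-range display
    return luminance                      # matching types: use values directly
```

Note how the bright end (5000 to 50,000 nits) is compressed into a narrow 50 nit band near display white, while everything below 750 nits passes through unchanged, matching the clipping-heavy strategy described for the beach example.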
[0120] In some embodiments, the system may be arranged to provide an efficient encoding of the HDR pixel values such that not all available data bits are used. For example, the data segment may be arranged to provide pixel value data in K-bit words. The data segment may, for example, be an enhanced color precision data segment that can provide improved accuracy. For example, the data segment may provide 16-bit LDR RGB data values, corresponding to K being equal to 48 bits. However, the HDR data may be generated according to a more efficient encoding, such as a 32-bit RGBE representation. In such embodiments, there are 16 additional bits per pixel that are not used by the HDR data. These additional bits may, in some cases, be used to provide other information. For example, the unused bits can be used to provide additional color information. In other embodiments, the bits can be set to a constant value to provide more efficient encoding, thereby reducing the data rate. [0121] In some embodiments, the apparatus of FIGURE 2 (or 3) may be arranged to generate an image signal comprising a second indicator that indicates that the data segment is used for LDR data, even in the case where it is in fact used for HDR data. Thus, this second indicator may indicate that the data of the data segment is conventional LDR data according to a suitable LDR representation, even in the case where the data segment does not actually contain such LDR data but instead contains HDR data according to a different color representation. [0122] Thus, in this embodiment, the image signal may contain a plurality of indicators that may, in some scenarios, conflict with each other (or where one indicator may be "wrong"). [0123] The approach may allow some equipment, processing and functionality to use only the second indicator, resulting in the data being treated exactly as if it were LDR data.
This approach is particularly suitable for components that are not capable of handling HDR data (e.g. legacy equipment) but can handle image signals with LDR data. At the same time, however, other equipment, processing and functionality can be arranged to use the first indicator to correctly interpret the data of the data segment and accordingly process it as HDR data. Such HDR-capable components can thus take full advantage of the HDR data. [0124] The approach may be particularly suitable for enhancing existing LDR systems and standards to include HDR data. For example, the second indicator could be an indicator of the original LDR system/standard, with the first indicator being a new indicator introduced when enhancing the system to include HDR. The new indicator can be provided in an optional section of the image signal. In this way, existing equipment that is used, for example, for communication, routing, switching, etc. can process the signal in exactly the same way as an LDR signal, based only on the second indicator. Thus, since the HDR data is encoded in a data segment that may also be used for LDR data, and the second indicator is consistent with that, legacy equipment will not notice any difference between an HDR signal and an LDR signal. Likewise, existing LDR distribution equipment can be used to distribute HDR data from an HDR source to an HDR sink. The HDR-capable sink, however, will be arranged to look for the first indicator and can accordingly determine that the data contained in the data segment is HDR data and not LDR data. [0125] In the following, a specific example of an embodiment in which the image signal is generated in accordance with the HDMI™ standard will be provided. The embodiment uses the Deep Color mode of HDMI™ to carry the HDR content. [0126] HDMI™ supports the transmission of video content in various pixel encodings, such as YCbCr 4:4:4, YCbCr 4:2:2 and RGB 4:4:4.
In the standard HDMI™ encoding formats, 8 bits are available per component, corresponding to pixel values being provided as 24-bit words. In addition, however, HDMI™ supports the transmission of content with greater color accuracy and/or a wider color gamut than the normal 8 bits per component. This is called Deep Color mode, and in this mode HDMI™ supports up to 16 bits per component (48 bits per pixel, i.e. 48-bit words). [0127] In Deep Color mode, the link frequency is increased by a ratio of pixel size/24 (24 bits/pixel = 1.0 x pixel clock frequency), and an additional control packet is transmitted which indicates the color depth and the bit packing to the sink (this control packet can thus be an example of the second indicator mentioned above). In the example, this same mechanism is also used for transmitting HDR content, and no changes to this mechanism are required. [0128] In the example, HDR content is communicated in Deep Color data segments instead of enhanced-precision LDR data. Communication is achieved by setting the HDMI™ link to a Deep Color mode, but with an additional indication being introduced to reflect that the data is not enhanced-precision LDR data, but rather HDR data. [0129] Furthermore, the pixel encoding does not merely use the linear RGB, 16-bits-per-component approach of Deep Color mode with an increased dynamic range, but instead provides the HDR data using efficient HDR pixel encodings such as, for example, RGBE, XYZE, LogLuv or, for example, a 12-bit-per-component floating-point RGB encoding that can also be carried in the Deep Color mode of HDMI™. This more efficiently HDR-encoded data is then transmitted using the Deep Color transmission mode of HDMI™. [0130] For example, as illustrated in FIGURE 5, a 48-bit Deep Color word comprises three 16-bit components, corresponding to a linear R, G and B sample.
Encoding HDR data in this linear color representation tends to be suboptimal, and in the example of FIGURE 5, the 48-bit word is instead used to provide an 8-bit mantissa for each of the R, G and B samples, along with a shared 8-bit exponent. Alternatively, it could be used for 3 x 12 or 3 x 14 bit mantissas plus a 6-bit exponent, etc. [0131] The exponent value provides a common scaling factor for the three mantissas, the scaling factor being equal to 2 to the power of the exponent value minus 128. The mantissas can be linear and can be provided as floating point values. This RGBE encoding can provide a much more efficient representation of the very large dynamic range associated with HDR data. In fact, in the example, the encoding uses only 32 bits, thus leaving spare bandwidth on the interface which can, for example, be used for the transmission of 3D or 4k2k formats. [0132] The approach allows efficient HDR communication using HDMI™ and, in fact, requires only minimal changes to the HDMI™ standard. An easy introduction of HDR into HDMI™ can thus be achieved and, in particular, no new hardware is required. In addition, existing equipment may be able to switch HDR data, since it can be treated as Deep Color data. [0133] In the example, the HDMI™ interface is set to Deep Color mode, but with an indicator set to indicate that the transmitted content is not Deep Color data but HDR data instead. The indicator can be provided by appropriately setting reserved fields in an AVI (Auxiliary Video Information) InfoFrame. As another example, the indicator can be provided in the form of a new InfoFrame that is specifically defined to indicate the transmission of HDR content. As yet another example, the HDMI™ vendor-specific InfoFrame can be used to provide the indication. [0134] In more detail, signaling in HDMI™ is based on CEA 861-D. CEA 861-D defines signaling from sink to source via E-EDID, and from source to sink via the AVI InfoFrame.
The AVI InfoFrame provides signaling on color and chroma sampling structure, overscan and underscan, and aspect ratio. [0135] According to some embodiments, the HDMI interface is set to indicate the transmission of Deep Color content, but preferably with an HDR pixel encoding in the form of, for example, RGBE (or another efficient HDR representation). [0136] An exemplary (part of a) possible AVI InfoFrame may be as follows. [0137] Y1 and Y0 indicate the color component sample format and the chroma sampling used. For transmitting HDR content, this can be set to 00 or 10, indicating RGB or YCbCr 4:4:4. Preferably, the currently reserved value 11 may be used to indicate RGBE or another suitable HDR representation. [0138] C1 and C0 indicate the colorimetry of the transmitted content. For HDR content, this can be set to 00, meaning no data, or to 11 to indicate that extended colorimetry is used, as further indicated by bits EC0, EC1 and EC2. [0139] ITC indicates whether the content is IT content, and this bit is used in conjunction with CN1 and CN0 to indicate to the collector that it should avoid any filtering or analog reconstruction operations. For HDR content, this bit can typically be set. [0140] EC2, EC1 and EC0 indicate the color space, i.e. the colorimetry, of the content. For HDR, one of the widest color gamuts currently defined can be used. Also, the currently reserved fields can be used to indicate other color spaces more suitable for future HDR displays. [0141] Q1 and Q0 indicate the RGB quantization range; for HDR content, full range (10), or the currently reserved value 11, could be used to indicate HDR content transmitted in Deep Color mode. YQ1 and YQ0 indicate the same, but for YCC quantization. Again, there are two reserved values that could be used for the purpose of indicating HDR content carried in Deep Color mode, such as 36-bit YCrCb.
[0142] CN1 and CN0 indicate the content type (Graphics, Photo, Cinema, Game) for the IT application and are used in combination with the ITC bit. [0143] In order to allow the collector (the screen) to indicate that it supports HDR content, an extension of the E-EDID specification can be implemented. HDMITM uses E-EDID to signal display capabilities back to the playback device. The HDMITM specification, through an HDMITM vendor-specific data block in the E-EDID, already specifies how to indicate support for Deep Color mode transmission. This can be extended to also include the ability to indicate support for HDR formats such as RGBE or other HDR color encodings. [0144] As another example, an indicator can be included to indicate that the screen supports HDR content, together with a list of the color encodings it can support in addition to those already specified in HDMITM, such as RGBE, XYZE, LogLuv 32 or even EXR. [0145] An extended version of the HDMITM vendor-specific data block with signaling for HDR support could, for example, be as follows: [0146] where "HDMI_HDR_present" indicates that the display supports HDR content and "HDR_color_encoding" indicates any additional color encodings supported. [0147] As another example, the approach can be used for a DisplayPort interface. For example, an approach similar to that described for HDMI can be used, with the image data of a main content stream containing LDR data, HDR data or indeed both. An indicator can be provided to indicate the type of image data in the content stream. Control and configuration data (including, in particular, the indicator) may be provided in Secondary Data Packets and may, in particular, be provided using CEA 861 InfoFrames, as described for HDMI. Also, the AUX channel can be used for other control information. In particular, the display's ability to handle HDR data can be communicated using the AUX channel. [0148] As yet another example, the approach can be used for Blu-ray DiscTM systems.
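The InfoFrame fields walked through in paragraphs [0137]–[0142] can be illustrated with a small bit-packing sketch. The bit positions follow our reading of the CEA 861-D AVI InfoFrame data-byte layout and should be checked against the standard; the default values (reserved code 11 for Y1Y0, extended colorimetry, full-range RGB quantization) are the HDR proposals from the text, not codes defined by CEA 861-D:

```python
def pack_avi_hdr_bytes(y=0b11, c=0b11, itc=1, ec=0b111, q=0b10, yq=0b00, cn=0b00):
    """Pack the discussed AVI InfoFrame fields into data bytes 1, 2, 3 and 5."""
    byte1 = (y & 0b11) << 5                                   # Y1,Y0: pixel encoding
    byte2 = (c & 0b11) << 6                                   # C1,C0: colorimetry
    byte3 = ((itc & 1) << 7) | ((ec & 0b111) << 4) | ((q & 0b11) << 2)
    byte5 = ((yq & 0b11) << 6) | ((cn & 0b11) << 4)           # YCC quant., content type
    return byte1, byte2, byte3, byte5
```

A collector receiving the reserved Y1Y0 code 11 together with extended colorimetry bits would then interpret the Deep Color payload as HDR data rather than enhanced-precision LDR data.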
[0149] It will be appreciated that the described system can be used with many different types of content creation, provision and consumption, including, for example, consumer systems. [0150] FIGURE 6 schematically presents an example of some of the devices that may be present on the creation (transmission) side, to be used to create a good color description signal. In the example, the devices are integrated with a classic celluloid film camera (note that the digital-assist representation of the scene will only be completely linkable [as regards the pixel values of analog vs. digital recordings] to the celluloid picture actually captured if film-material calibration models are incorporated to map the two (development, then, is still an unknown variable, which can however be reproduced in a supplementary manner), but even without this, digital recording can still produce very valuable parallel information; for example, if it is geometrically co-registered with the viewport captured on celluloid, regions can be defined and, in addition to the developed texture values captured on celluloid, one can encode, for example, actual linear scene-view values from the digital capture medium), because the person skilled in the art will understand how to transpose these components to the environment of, say, a color grader, or a transcoder doing the same for, for example, an old Laurel and Hardy picture. [0151] FIGURE 6 presents, affixed to the camera 601, a digital display 603 (which, for example, receives a feed from a CCD co-registered with the camera lens). However, the connection 604 does not have to be fixed, but can also be a transmitter for several separate displays (e.g., one for the camera operator and one in the director's overview tower).
On the display 603, the camera operator or cinematographer can outline, for example, a region 650 that they know they have calibrated with their stage lighting as a dark part of the image, which can be done, for example, with a light pen 608 or other means of user-interface input [we present only one example, as we believe the skilled person can well understand what types of systems allow a user to give feedback on a displayed image]. The display 603 may store the added information in a memory 606 (e.g., a detachable memory card) or communicate it via a transmission system 605. It may also receive further information from a scene analysis device 620 at the shooting location (which can be simply a light meter or even a spatially sampling spectrometer) through its transmission system 621, which can also transmit to the final data accumulation site (i.e., 640). In addition, in-scene meters 630 (i.e., local illumination meters to measure how an actor's face is lit, especially under highly variable lighting; sphere systems for sampling the surrounding illumination distribution; etc.) may transmit their data to any part of the system via their transmission system 631. The receiving display may then attempt to reproduce light at its original brightness, or at least at a fraction (or function) thereof, typically according to some psychovisual model, to create a similar look or an artistic look, etc. All data is accumulated in a data accumulation apparatus 640 with internal memory, typically a computer (with transmission system 641). [0152] The system illustrated in FIGURE 6 can therefore, for example, be used by an operator to generate an LDR image by manual color grading/tone mapping (and an HDR image, or at least a partial component thereof, can also be composited). The resulting LDR image can then be encoded and represented in the first pixel picture. The system can even automatically determine parameters for generating the HDR image.
Alternatively, the operator may also use the system of FIGURE 6 to generate the HDR extension data, for example in a semi-automatic process. [0153] FIGURE 7 shows an exemplary image decoding and display system on the receiving side, for example in a consumer living room (the skilled person will understand what a similar system in accordance with the principles of our invention would look like in, for example, a digital cinema theater). One embodiment of a color-representation image processing apparatus 701 is a set-top-box decoder (which may correspond to the content processing device 103 of FIGURE 1) with a built-in Blu-ray player (but this may also be, e.g., a laptop computer, or a portable device such as a cell phone, etc.; i.e., the apparatus 701 can be as small as a plug-in card [as long as it is able to read the regime specifications and allow color processing with them] or as large as a professional cinema transcoding studio) that is capable of receiving a Blu-ray disc 702 with the entire LDR/HDR-extension image signal encoded on it, i.e. both the first picture with the LDR data and the second picture with the included HDR extension data. [0154] The apparatus may, as another example, receive the signals through a first connection 703, for example to a television signal transmission cable 704 (or antenna, or an input for digital photos on a memory card, etc.; image signal can also mean various things, for example a signal encoded according to a television standard, or a raw image file, etc.), which carries the input signals (typically compression-encoded). In some embodiments, the two pictures could be provided via two paths; for example, the HDR description data could come in another way, via a second connector 704, for example connected to the internet 705.
[0155] The apparatus 701 has an IC that has at least one extractor 711 arranged to extract the data and either output it directly or convert it to new values more suitable for image processing controlled by an image processing unit 712. This can be as simple as applying some tone reproduction transforms to the pixels corresponding to the special regime to be rendered, or can involve complex algorithms, for example typically corresponding to any of the algorithms that can be applied on the authoring side, for example a segmentation and/or tracking algorithm/unit. [0156] The player 701 can output its improved intended-rendering image IR' for the display/television over a video cable 720 (e.g., HDMI), but since the television can do (or be asked to do) additional processing (in its image analysis and/or processing IC 731), a second connection (cable or wireless) 721 may be present for control signals CS (which may comprise any data of the signal and/or control data derived therefrom). Typically, these additional control signals can be added over the video cable, for example by updating the (wireless) HDMI protocol. The apparatus may also send color signals over a connection 723 to a second, ambient color display 740, which may also obtain its intended-rendering color input signals via the display 730. The exemplary display 730 has an LED backlight 732, ideal for HDR rendering. Ambient measurement devices such as the measurement device 780 may be present, for example an inexpensive camera that can check the surroundings of the television, the ambient light, reflections on the television's front plate, the visibility of calibration grayscales, etc., and they can communicate this information to the apparatus 701 and/or the display 730. [0157] The algorithm components disclosed in this text can (in whole or in part) be realized, in practice, as hardware (e.g., parts of an application-specific IC) or as software running on a special digital signal processor, or a generic processor, etc.
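As a hedged illustration of the kind of tone reproduction transform the image processing unit 712 might apply (our sketch, not the patent's algorithm; the peak luminances are assumed example values):

```python
# Map a scene-referred HDR luminance onto a display's luminance range with a
# simple gamma-like compression. Peak values and gamma are illustrative only.

def tone_map(luma: float, peak_in: float = 5000.0, peak_out: float = 500.0,
             gamma: float = 0.45) -> float:
    """Map an HDR luminance (cd/m^2) to a display luminance (cd/m^2)."""
    normalized = max(0.0, min(1.0, luma / peak_in))   # clip to codable range
    return peak_out * normalized ** gamma             # boost mid-tones
```

A real apparatus would, as the text notes, make this mapping dependent on the indicator (e.g., the signaled white level) and possibly on measured ambient light.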
[0158] It should be understandable to the person skilled in the art, from our presentation, which components can be optional improvements and can be realized in combination with other components, and how the (optional) steps of the methods correspond to the respective means of the apparatus, and vice versa. The word "apparatus" in this application is used in its broadest sense, namely a group of means that allows the achievement of a particular objective, and can therefore, for example, be (a small part of) an IC, or a dedicated appliance (such as an appliance with a display), or part of a networked system, etc. "Arrangement" is also intended to be used in the broadest sense, so it may comprise, inter alia, a single apparatus, a part of an apparatus, a collection of (parts of) cooperating apparatuses, etc. [0159] The denotation "computer program product" should be understood to encompass any physical realization of a collection of commands that enables a generic or special-purpose processor, after a series of loading steps (which may include intermediate conversion steps, such as translation into an intermediate language and a final processor language), to get the commands into the processor and to execute any of the characteristic functions of an invention. In particular, the computer program product may be realized as data on a carrier, such as a disk or tape, data present in memory, data traveling over a wired or wireless network connection, or program code on paper. Apart from program code, characteristic data required for the program may also be embodied as a computer program product. Some of the steps required for the operation of the method may already be present in the functionality of the processor, instead of being described in the computer program product, such as data input and output steps.
[0160] It will be appreciated that the above description has, for clarity, described embodiments of the invention with reference to different functional circuits, units and processors. However, it will be apparent that any suitable distribution of functionality between different functional circuits, units or processors can be used without detracting from the invention. For example, functionality illustrated as being performed by separate processors or controllers may be performed by the same processor or controller. Accordingly, references to specific functional units or circuits are to be seen only as references to suitable means of providing the described functionality, rather than as indicative of a strict logical or physical structure or organization. [0161] The invention may be implemented in any suitable manner, including hardware, software, firmware or any combination thereof. The invention may optionally be implemented, at least partially, as computer software that runs on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable manner. In fact, the functionality can be implemented in a single unit, in a plurality of units, or as part of other functional units. As such, the invention may be implemented in a single unit or may be physically and functionally distributed across different units, circuits and processors. [0162] While the present invention has been described in connection with certain embodiments, it is not intended to be limited to the specific form set forth herein. On the contrary, the scope of the present invention is limited only by the appended claims. Furthermore, while an aspect may appear to be described in connection with particular embodiments, one skilled in the art will recognize that various aspects of the described embodiments may be combined in accordance with the invention.
In the claims, the term "comprising" does not exclude the presence of other elements or steps. [0163] Furthermore, although individually listed, a plurality of means, elements, circuits or method steps may be implemented, for example, by a single circuit, unit or processor. Additionally, although individual aspects may be included in different claims, they may possibly be advantageously combined, and their inclusion in different claims does not imply that a combination of aspects is not feasible and/or advantageous. Also, the inclusion of an aspect in one category of claims does not imply a limitation to that category, but rather indicates that the aspect is equally applicable to other claim categories, as appropriate. Furthermore, the order of aspects in the claims does not imply any specific order in which the aspects must be worked, and in particular the order of the individual steps in a method claim does not imply that the steps have to be performed in that order. Instead, the steps can be performed in any suitable order. Furthermore, singular references do not exclude a plurality. Thus, references to "a", "an", "first", "second", etc. do not preclude a plurality. Reference signs are provided merely as clarifying examples and should not be construed as limiting the scope of the claims in any way.
Claims (19) [0001] 1. APPARATUS FOR GENERATING AN IMAGE SIGNAL, in which pixels are encoded in terms of N bits, encoding at least one luma per pixel, the apparatus being characterized in that it comprises: a receiver (201) for obtaining high dynamic range pixel values according to a first color representation in terms of M bits; a first generator (203) for including the high dynamic range pixel values in the image signal in terms of N bits, in accordance with a second color representation; and a second generator (205) for including in the image signal an indicator of a type of HDR encoding by which the high dynamic range pixel values are encoded, wherein the indicator comprises an indication of a display luminance associated with the second color representation. [0002] 2. APPARATUS according to claim 1, characterized in that the first color representation is different from the second color representation. [0003] 3. APPARATUS according to claim 2, characterized in that it additionally comprises a transformation unit (301) for transforming the high dynamic range pixel values from the first color representation to the second color representation. [0004] 4. APPARATUS according to claim 2 or 3, characterized in that the transformation comprises a compression of the M-bit input terms into N-bit terms, where M is greater than N. [0005] 5. APPARATUS according to claim 4, characterized in that the compression comprises using a different quantization scheme for pixel values according to the second color representation than for pixel values according to the first color representation. [0006] 6. APPARATUS according to claim 1, characterized in that the first color representation is the same as the second color representation. [0007] 7. APPARATUS according to claim 1 or 2, characterized in that the indicator comprises an indication of the absolute white of the second color representation. [0008] 8.
APPARATUS according to claim 1, characterized in that the indicator comprises information on exactly how the luma or color values along the range of codable colors in the M-bit representation are distributed along the codable range of the N-bit signal. [0009] 9. APPARATUS according to claim 1 or 2, characterized in that the first color representation employs a separate color value for each color component of the first color representation, and the second color representation employs a set of color values for the color components of the second color representation together with a common exponential factor. [0010] 10. APPARATUS according to claim 1 or 2, characterized in that the image signal comprises a segment for pixel image data, the first generator (203) is arranged to alternatively include low dynamic range pixel values or the high dynamic range pixel values according to the second color representation in the segment, and the indicator is arranged to indicate whether the segment comprises low dynamic range color values or high dynamic range color values. [0011] 11. APPARATUS according to claim 10, characterized in that the second generator (205) is arranged to additionally include a second indicator in the image signal, the second indicator declaring that the segment is encoding low dynamic range pixel values both when the segment comprises low dynamic range pixel values and when the segment comprises high dynamic range pixel values. [0012] 12. APPARATUS according to claim 11, characterized in that the first generator (203) is arranged to include the high dynamic range pixel values in a Deep Color data segment, in accordance with the HDMI standard. [0013] 13. APPARATUS according to claim 11, characterized in that the second generator (205) is arranged to include the indicator in an Auxiliary Video Information InfoFrame. [0014] 14.
METHOD OF GENERATING AN IMAGE SIGNAL, in which pixels are encoded in terms of N bits, encoding at least one luma per pixel, the method being characterized by comprising the steps of: obtaining high dynamic range pixel values according to a first color representation in terms of M bits; including the high dynamic range pixel values in the image signal in terms of N bits, in accordance with a second color representation; and including, in the image signal, an indicator of a type of HDR encoding by which the high dynamic range pixel values are encoded, wherein the indicator comprises an indication of a display luminance associated with the second color representation. [0015] 15. APPARATUS FOR PROCESSING AN IMAGE SIGNAL, the apparatus being characterized in that it comprises: a receiver (401) for receiving the image signal, a data segment of the image signal comprising one of high dynamic range pixel values in terms of N bits according to a first color representation and low dynamic range pixel values according to a second color representation, and for receiving an indicator indicative of a type of HDR encoding by which the high dynamic range pixel values are encoded, wherein the indicator comprises an indication of a display luminance associated with the second color representation; an extractor (403) for extracting data from the data segment; and a processor (405) arranged to process the data of the data segment as high dynamic range pixel values depending on the value of the indicator. [0016] 16. APPARATUS according to claim 15, characterized in that the processor (405) is arranged to adapt its processing to map the N-bit terms to output signals presentable on a display, depending on the encoding used in the N-bit terms, as indicated by the indicator. [0017] 17.
APPARATUS according to claim 15, characterized in that the processor (405) is arranged to apply a color transformation that implements at least a mapping of gray values along a first luminance range associated with the N-bit terms to a second luminance range associated with a display (407), in which the color transformation is dependent on an absolute white, or on any white-level indication that describes a scene luminance level of the maximum luminance codable with the N-bit terms. [0018] 18. APPARATUS according to claim 15, wherein the image signal is in accordance with an HDMI standard, the apparatus being characterized in that it further comprises means for transmitting an indication of the ability to process high dynamic range pixel values in an HDMI vendor-specific data block. [0019] 19. METHOD OF PROCESSING AN IMAGE SIGNAL, the method being characterized by comprising: receiving the image signal, a data segment of the image signal comprising one of high dynamic range pixel values in terms of N bits according to a first color representation and low dynamic range pixel values according to a second color representation; receiving an indicator indicative of a type of HDR encoding by which the high dynamic range pixel values are encoded, wherein the indicator comprises an indication of a display luminance associated with the second color representation; and processing the data from the data segment either as high dynamic range pixel values or as low dynamic range pixel values, depending on the value of the indicator.
Legal status:
2018-12-11 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6, patent gazette]
2019-10-29 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21, patent gazette]
2021-07-27 | B06A | Patent application procedure suspended [chapter 6.1, patent gazette]
2021-10-19 | B09A | Decision: intention to grant [chapter 9.1, patent gazette]
2022-01-04 | B16A | Patent or certificate of addition of invention granted [chapter 16.1, patent gazette]. Free format text: "TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 27/04/2012, SUBJECT TO THE LEGAL CONDITIONS."
Priority:
Application number | Filing date | Patent title
EP11165491 | 2011-05-10 |
EP11165491.9 | 2011-05-10 |
PCT/IB2012/052102 (published as WO2012153224A1) | 2012-04-27 | High dynamic range image signal generation and processing